Biological Imaging
Cambridge University Press (CUP)
Preprints posted in the last 90 days, ranked by how well they match Biological Imaging's content profile, based on 15 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Ait Laydi, A.; Cueff, L.; Crespo, M.; El Mourabit, Y.; Bouvrais, H.
Background: Segmenting cytoskeletal filaments in microscopy images is essential for studying their roles in cellular processes such as cell division and intracellular transport. However, this task is highly challenging due to the fine, densely packed, and intertwined nature of these structures. Imaging limitations--noise, low contrast, and uneven fluorescence--further complicate analysis. While deep learning has advanced segmentation of large, well-defined biological structures, its performance often degrades under such adverse conditions. Additional challenges include obtaining precise annotations for curvilinear structures and managing severe class imbalance during training. Results: We introduce a novel noise-adaptive attention mechanism that extends the Squeeze-and-Excitation (SE) module to dynamically adjust to varying noise levels. Integrated into a U-Net decoder with residual encoder blocks, this yields ASE_Res_UNet, a lightweight yet high-performance model. To address annotation challenges, we developed a synthetic dataset generation strategy that ensures accurate annotations of fine filaments in noisy images, producing a synthetic dataset with two difficulty levels for segmentation benchmarking. We systematically evaluated loss functions and metrics to mitigate class imbalance, ensuring robust performance assessment. ASE_Res_UNet effectively segmented microtubules in noisy synthetic images, outperforming its ablated variants. It also demonstrated superior segmentation compared to models with alternative attention mechanisms or distinct architectures, while requiring fewer parameters, making it efficient for resource-constrained environments. Evaluation on a newly curated real microscopy dataset and a recently reannotated dataset highlighted ASE_Res_UNet's effectiveness in segmenting microtubules beyond synthetic images. For these datasets, ASE_Res_UNet was competitive with a recent synthetic data-driven approach that shares two pretrained cytoskeleton models.
Importantly, ASE_Res_UNet generalised well to other curvilinear structures (blood vessels and nerves) across diverse imaging conditions. Conclusions: This work advances microtubule segmentation through three key contributions: (1) providing two benchmark datasets (synthetic and real), addressing a critical gap in standardised evaluation resources for this task; (2) introducing ASE_Res_UNet, a lightweight yet robust model combining noise-adaptive attention with residual learning; (3) validating competitive performance across synthetic and real microscopy data. Additionally, we demonstrated generalisation to diverse curvilinear structures, showcasing potential for broader applications in biological research and medical diagnosis.
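As a concrete illustration of the Squeeze-and-Excitation idea this abstract builds on, here is a minimal, dependency-free sketch of channel recalibration. The function name and the toy weights `w1`/`w2` are hypothetical, not taken from ASE_Res_UNet; the noise-adaptive extension the paper proposes is not reproduced here.

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Minimal Squeeze-and-Excitation recalibration on a list of 2D
    channel maps (toy weights, no training -- illustration only)."""
    # Squeeze: global average pool -> one descriptor per channel
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excitation: bottleneck MLP (ReLU then sigmoid) -> per-channel gate
    hidden = [max(0.0, sum(w1[j][i] * z[i] for i in range(len(z))))
              for j in range(len(w1))]
    gates = [
        1.0 / (1.0 + math.exp(-sum(w2[c][j] * hidden[j]
                                   for j in range(len(hidden)))))
        for c in range(len(w2))
    ]
    # Scale: reweight each channel map by its gate
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gates)]
```

With identity-like weights, a brighter channel receives a gate closer to 1, so informative channels are emphasized relative to weak ones.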
Lee, R. M.; Eisenman, L. R.; Hobson, C.; Aaron, J. S.; Chew, T.-L.
Motion is an essential component of any living system. It is rich with information, but it is often challenging to quantitatively extract biologically informative results from the motion apparent in microscopy images. This challenge is exacerbated by the wide variety in biological movement, which often takes the form of difficult-to-segment amorphous structures undergoing complex motion. An image processing technique known as optical flow can capture motion at each pixel in an image, thus bypassing the need for object segmentation or a priori definition of motion types. This makes it a powerful tool for quantitative assessment of biological systems from the protein to organism scale. However, despite its flexibility and strengths for analyzing fluorescence microscopy images, its adoption in the bioimaging community has been limited by the availability of easy-to-use tools and guidance in results interpretation. Here we describe an optical flow tool, OpticalFlow3D, that can be run in Python or MATLAB and is compatible with three-dimensional microscopy images. Using biological examples across length scales, we illustrate how OpticalFlow3D can enable new biological insight.
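The brightness-constancy principle underlying optical flow can be sketched in a few lines. This 1D toy (a hypothetical function, not the OpticalFlow3D API) recovers a uniform shift from spatial and temporal gradients via I_x·u + I_t = 0:

```python
def flow_1d(frame_a, frame_b):
    """Per-pixel 1D optical flow from brightness constancy:
    I_x * u + I_t = 0, so u = -I_t / I_x (toy sketch)."""
    flow = []
    for i in range(1, len(frame_a) - 1):
        ix = (frame_a[i + 1] - frame_a[i - 1]) / 2.0  # spatial gradient
        it = frame_b[i] - frame_a[i]                  # temporal gradient
        flow.append(-it / ix if abs(ix) > 1e-12 else 0.0)
    return flow
```

For a linear ramp shifted by one pixel between frames, every interior pixel reports a displacement of 1.0; real implementations aggregate such constraints over neighborhoods to handle noise and the aperture problem.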
Buhn, N. E.; Adunur, S. R.; Hamilton, J.; Levis, S.; Hagen, G. M.; Ventura, J.
Background: Live-cell fluorescence microscopy enables the study of dynamic cellular processes. However, fluorescence microscopy can damage cells and disrupt these dynamic processes through photobleaching and phototoxicity. Reducing light exposure mitigates the effects of photobleaching and phototoxicity but results in low signal-to-noise ratio (SNR) images. Deep learning provides a solution for restoring these low-SNR images. However, these deep learning methods require large, representative datasets for training, testing, and benchmarking, as well as substantial GPU memory, particularly for denoising large images. Results: We present a new fluorescence microscopy dataset designed to expand the range of imaging conditions and specimens currently available for evaluating denoising methods. The dataset contains 324 paired high/low-SNR images ranging from 4 to 282 megapixels across 12 sub-datasets that vary in specimen, objective used, staining type, excitation wavelength, and exposure time. The dataset also includes spinning disk confocal microscopy examples and extreme-noise cases. We evaluated three state-of-the-art deep learning denoising models on the dataset: a supervised transformer-based model, a supervised CNN model, and an unsupervised single-image model. We also developed an image stitching method that enables large images to be processed in smaller crops and reconstructed. Conclusions: Our dataset provides a diverse benchmark for evaluating deep learning denoising methods, and our stitching method provides a solution to GPU memory constraints encountered when processing large images. Among the evaluated deep learning models, the supervised transformer-based model had the highest denoising performance but required the longest training time.
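The crop-and-stitch idea for fitting large images into limited GPU memory can be sketched on a 1D signal (one image row standing in for a full image). `stitch_process` is a hypothetical illustration, not the paper's implementation: each overlapping crop is processed independently and only its interior is kept, discarding edge artifacts.

```python
def stitch_process(signal, fn, crop=8, overlap=2):
    """Run fn on overlapping crops and reassemble, keeping only each
    crop's interior (hypothetical tiling scheme, illustration only)."""
    out = [None] * len(signal)
    step = crop - 2 * overlap          # interior width kept per crop
    start = 0
    while start < len(signal):
        lo = max(0, start - overlap)               # pad crop on the left
        hi = min(len(signal), start + step + overlap)  # and on the right
        processed = fn(signal[lo:hi])
        for i in range(start, min(start + step, len(signal))):
            out[i] = processed[i - lo]             # keep interior only
        start += step
    return out
```

For any purely elementwise `fn`, the stitched result is identical to processing the whole signal at once; for neural denoisers, the overlap hides boundary effects of each crop.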
Fan, B.; Bilodeau, A.; Beaupre, F.; Wiesner, T.; Gagne, C.; Lavoie-Cardinal, F.; Hlozek, R.
Significance: Fluorescence-based Ca2+ imaging is a powerful tool for studying localized neuronal activity, including miniature synaptic calcium transients, providing real-time insights into synaptic activity. These transients induce only subtle changes in the fluorescence signal, often barely above baseline, which poses a significant challenge for automated synaptic transient detection and segmentation. Aim: Detecting astronomical transients similarly requires efficient algorithms that remain robust over a large field of view with varying noise properties. We leverage techniques used in astronomical transient detection for miniature synaptic calcium transient detection in fluorescence microscopy. Approach: We present Astro-BEATS, an automatic miniature synaptic calcium transient segmentation algorithm, designed for Ca2+-imaging videos, that incorporates image estimation and source-finding techniques used in astronomy. Astro-BEATS uses the Rolling Hough Transform filament detector to construct an estimate of the expected (transient-free) fluorescence signal of both the dendritic foreground and the background. Subtracting this baseline signal yields difference images displaying transient signals. We use Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to find sources clustered in space and time. Results: Astro-BEATS outperforms current threshold-based approaches for synaptic Ca2+ transient detection and segmentation. The produced segmentation masks can be used to train a supervised deep learning algorithm for improved synaptic Ca2+ transient detection in Ca2+-imaging data. The speed of Astro-BEATS and its applicability to previously unseen datasets without re-optimization make it particularly useful for generating training datasets for deep learning-based approaches.
Conclusion: Astro-BEATS greatly reduces the time needed to annotate synaptic Ca2+ transients and removes the significant overhead of human expert annotation, enabling consistent analysis of new Ca2+-imaging datasets.
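The baseline-subtraction step can be illustrated with a toy per-pixel version: estimate a transient-free baseline from a low percentile of the trace, subtract it, and threshold the residual against the noise scale. This hedged sketch stands in for the paper's Rolling Hough Transform and DBSCAN pipeline, which it does not reproduce.

```python
def transient_mask(trace, q=0.2, k=3.0):
    """Flag time points where a fluorescence trace rises above a
    percentile-based baseline estimate (toy stand-in, illustration only)."""
    s = sorted(trace)
    baseline = s[int(q * (len(s) - 1))]      # low percentile ~ resting signal
    resid = [v - baseline for v in trace]
    # Estimate the noise scale from sub-baseline residuals only,
    # so transients do not inflate the threshold.
    neg = [r for r in resid if r <= 0] or [0.0]
    sigma = (sum(r * r for r in neg) / len(neg)) ** 0.5
    return [r > k * sigma for r in resid]
```

On a flat trace with one spike, only the spike frame is flagged; a clustering step (as in the paper) would then group flagged pixels in space and time into transient events.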
Bright, M.; Mi, X.; Duarte, D.; Carey, E.; Lyu, B.; Wang, Y.; Nimmerjahn, A.; Yu, G.
Background: Advanced biological imaging analysis platforms such as Activity Quantification and Analysis (AQuA2) enable accurate spatiotemporal activity analysis across diverse cell populations within many species. These tools are increasingly important for investigating cellular signaling dynamics and behavior. However, despite advances in the accuracy and species capability of AQuA2, it remains computationally demanding for analysis of long time-series datasets and requires all users to maintain a MATLAB license, which may limit accessibility and large-scale deployment. Results: To address these limitations, we have designed and made available AQuA2-Cloud, a portable software stack and web platform developed as an improvement and further evolution of AQuA2. This container-deployable system permits multi-user, cloud-based, high-accuracy activity quantification with intuitive workflows, export of analysis data and project files, and comparable processing times. The platform offers integrated features such as in-browser analysis control interfaces, asynchronous program state control, multi-user support and user management, support for unreliable connections, file uploading and downloading via web browsers and File Transfer Protocol, and centralized organization of analysis output. Conclusion: AQuA2-Cloud constitutes a cloud-native solution for laboratories or research groups seeking to centralize analysis of spatiotemporal biological imaging datasets while reducing software installation and licensing barriers for end users. The platform enables researchers with minimal technical expertise to perform advanced bioimaging analysis through standard web browsers while maintaining the analytical capabilities of AQuA2. AQuA2-Cloud source code, deployment procedures, and documentation are freely available at https://github.com/yu-lab-vt/AQuA2-Cloud.
Banerjee, T.; Abubaker-Sharif, B.; Devreotes, P. N.; Iglesias, P. A.
Summary: The plasma membrane and accompanying cortex serve as one of the major hubs of the signal transduction and cytoskeletal activities that collectively regulate numerous cell physiological processes such as migration, polarity, macropinocytosis, phagocytosis, and cytokinesis. Yet, dynamically tracking membrane-cortex-associated protein or lipid kinetics over time from live-cell image series remains a challenging task, primarily due to the difficulty of accurately extracting and aligning the cell boundary between consecutive frames, as the cell continuously deforms and moves. Here, we present Membrane Kymograph Generator, a cross-platform software that accepts multichannel time-lapse live-cell fluorescent imaging datasets as input and automates the cumbersome, heuristic process of boundary tracking, inter-frame alignment, and intensity sampling along the boundary. The software implements a rotational offset minimization algorithm that circularly aligns boundaries across consecutive frames by exhaustively searching for the optimal angular shift that minimizes point-to-point distances, while handling variations in boundary point counts due to cell shape changes. The software outputs kymographs that represent spatiotemporal dynamics of different membrane-associated proteins or biosensors, allows users to fine-tune visualization parameters through an interactive interface, and provides built-in correlation analysis tools for multi-channel datasets. Furthermore, the software allows advanced programmatic usage for batch processing and further analysis via a native API. Our validation tests demonstrated that the Membrane Kymograph Generator can be used to accurately track, visualize, and quantitate the spatial kinetics of a wide array of membrane proteins and lipid biosensors over extended time periods, in a variety of cell types, including Dictyostelium amoeba, human neutrophils, mouse macrophages, and different mammalian cancer cells.
The GUI-based software is user-friendly, does not require any technical expertise from users, and significantly reduces the manual effort and time required for kymograph generation and downstream analysis, while ensuring high accuracy and reproducibility. Availability and Implementation: Membrane Kymograph Generator is free and open-source software, licensed under the GNU General Public License 3.0 or later. The software is cross-platform: it can be graphically installed on both x86-64 and AArch64/ARM64 computers running Windows, macOS, or any standard Linux distribution. It is distributed as single installer files (and portable executables) targeting specific hardware architectures and operating systems, and hence can be installed natively without any dependency resolution. The source code, detailed documentation, specific installers, portable binaries, and test data are freely available at https://github.com/tatsatb/membrane-kymograph-generator. Additionally, since the software is written in Python, it can also be installed inside any Python environment using the pip package manager (package: https://pypi.org/project/membrane-kymograph) and interacted with via a built-in Python API.
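The rotational offset minimization described above can be sketched as an exhaustive search over circular index shifts. This toy version assumes equal boundary point counts on both frames, a restriction the actual software handles explicitly; the function name is hypothetical.

```python
def best_rotation(prev_pts, curr_pts):
    """Exhaustively search the circular shift of curr_pts that minimizes
    the summed point-to-point distance to prev_pts (sketch; assumes
    equal point counts, unlike the real algorithm)."""
    n = len(prev_pts)
    best_shift, best_cost = 0, float("inf")
    for shift in range(n):
        cost = sum(
            ((px - curr_pts[(i + shift) % n][0]) ** 2
             + (py - curr_pts[(i + shift) % n][1]) ** 2) ** 0.5
            for i, (px, py) in enumerate(prev_pts))
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift
```

After alignment, intensity samples taken at matching indices correspond to the same position along the cell boundary, which is what makes the kymograph rows comparable across time.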
Ahmadi, M.; Wagner, R.; Bekeschus, S.; Becker, M. M.
Bioimaging experiments in plasma medicine generate complex datasets that go beyond conventional imaging studies by combining microscopy data with heterogeneous metadata from biological experiments and highly parameterized gas plasma treatments. Plasma exposure conditions, such as device configuration, gas composition, and treatment parameters, are critical determinants of biological outcome, yet they are rarely captured in a standardized, machine-readable, and reusable manner. To address this gap, we present a research data management (RDM) workflow that operationalizes the Findable, Accessible, Interoperable, and Reusable (FAIR) principles across the bioimaging data lifecycle in plasma medicine. The workflow is implemented as a structured pipeline integrating open-source tools, including OMERO for image data management, eLabFTW as an electronic laboratory notebook, Adamant for schema-driven metadata collection, and Micro-Meta App for standardized documentation of microscopy acquisition settings; these tools are connected via programming interfaces to enable persistent linkage of metadata to image datasets using standardized annotations. The workflow is documented in a reproducible tutorial with an open-source Python Jupyter notebook hosted on GitHub. By integrating plasma treatment metadata with imaging data, this approach improves reproducibility, cross-study comparability, and data reuse in plasma medicine research.
Shtengel, D.; Shtengel, G.; Xu, C. S.; Hess, H. F.
Electron microscopy (EM) is widely used in many scientific fields, particularly in the life sciences, offering high-resolution information on the ultrastructure of biological organisms. Accurate characterization of EM image quality is important for assessing EM tool performance, alongside sample preparation protocols and imaging conditions. This paper provides an overview of tools we developed as plugins for the popular image processing package Fiji (ImageJ) (1). These tools include signal-to-noise ratio analysis, contrast evaluation, and resolution analysis, as well as the capability to import images acquired on custom FIB-SEM instruments (2). We have also made these tools available in Python, with both versions available on GitHub.
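A generic signal-to-noise computation of the kind such plugins perform might look like the sketch below. This uses one common SNR definition (background-subtracted mean signal over background standard deviation, in dB); the exact formula used by the Fiji plugins may differ, and the function name is hypothetical.

```python
import math

def snr_db(image, signal_mask):
    """SNR from user-supplied signal/background pixels (flattened lists):
    (mean signal - mean background) / background std, expressed in dB."""
    sig = [v for v, m in zip(image, signal_mask) if m]
    bg = [v for v, m in zip(image, signal_mask) if not m]
    mu_s = sum(sig) / len(sig)
    mu_b = sum(bg) / len(bg)
    var_b = sum((v - mu_b) ** 2 for v in bg) / len(bg)
    return 20.0 * math.log10((mu_s - mu_b) / var_b ** 0.5)
```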
Miller, D. J.; Gratton, B.; LeBlanc, Z.; Kaas, J. H.
Introduction: Supervised statistical learning for cell-level segmentation and morphometry in optical microscopy is limited less by algorithmic capacity than by the scarcity of reliable, expert-validated ground truth. In comparative neuroscience and quantitative histology, where classical stains such as Nissl's method remain the primary means to study cellular morphology, this bottleneck is acute: manual annotation is expensive, subject to individual bias, and rarely performed at the scale or consistency that computational approaches demand. No existing platform integrates a stain-specific bioimage segmentation protocol, a structured multi-annotator workflow, and consensus-based quality control into a single pipeline from image ingestion to machine-readable training data. Methods: We present Anatolution, an open-source, web-based platform designed to address this annotation gap, available at https://anatolution.herokuapp.com/public-tool/. Anatolution organizes microscopy images, including 2D arrays or 3D volumes, into project workspaces where multiple annotators independently label cellular structures against a shared computer vision catalogue. This design enables systematic inter-rater and intra-rater reliability assessment, with consensus derived from agreement across annotators rather than from any single expert's judgment. The platform enables the export of aggregated labels or annotation datasets for downstream statistical learning methods. We describe the system's architecture, its Nissl-specific segmentation pipeline, the consensus annotation workflow, and validation of inter-rater reliability. Conclusion: Across 20+ histological annotation containers annotated by up to 15 independent raters, consensus boundary agreement increased monotonically with annotator count, reaching a median Dice of 0.79 against the full-rater reference at seven annotators, with top-tier containers achieving leave-one-out ceiling values of 0.621-0.769 for cell-body segmentation.
The segmentation pipeline provided effective spatial anchoring, with 88% of consensus-annotated polygons containing at least one algorithmically detected seed. Anatolution provides open-source infrastructure for producing consensus-validated training data from classical histological preparations, addressing the primary bottleneck limiting supervised learning for cell-level morphometry.
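The Dice coefficient used above to quantify consensus agreement is straightforward to compute; a minimal version for flattened binary masks (2·|A∩B| / (|A|+|B|)):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat lists:
    2|A intersect B| / (|A| + |B|); 1.0 for two empty masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2.0 * inter / size if size else 1.0
```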
Zhao, G.
Cell painting assays generate high-dimensional, multi-channel imaging data that enable systematic characterization of cellular phenotypes. Increasingly, such assays are performed in longitudinal settings and under chronic perturbations, introducing additional challenges related to imaging variability, focus-field heterogeneity, and consistency across time points. Existing analysis workflows often require substantial manual adaptation to handle these complexities, limiting scalability and reproducibility. In this paper, we propose SCALE (Stable Cell painting Analysis for Longitudinal Experiments), an integrated, end-to-end analysis pipeline designed for robust longitudinal analysis of cell painting data. The pipeline combines nucleus-centered segmentation, automated quality control, feature extraction, and signal aggregation within a modular and configurable framework. Once assay-specific configurations are specified, the pipeline executes in a fully automated manner from raw images to downstream summary statistics and analysis-ready outputs. We demonstrate the utility of the pipeline using a chronic radiation exposure cell painting dataset, illustrating its ability to support consistent longitudinal comparisons across conditions and time points.
Letort, G.; Valon, L.; Michaut, A.; Cumming, T.; Xenard, L.; Phan, M.-S.; Dray, N.; Rueden, C. T.; Schweisguth, F.; Gros, J.; Bally-Cuif, L.; Tinevez, J.-Y.; Levayer, R.
Investigating single-cell dynamics and morphology in tissues and embryos requires highly accurate quantitative analysis of microscopy images. Despite significant advances in the field of bioimage analysis, even the most sophisticated segmentation and tracking algorithms inevitably produce errors (e.g., over-segmentation, missing objects, mis-connected objects). Although error rates may be small, their propagation throughout a time-lapse sequence has catastrophic effects on the accuracy of tracking and extraction of single-cell parameters. Extracting single-cell temporal information in the context of tissues and embryos thus requires expert curation to identify and correct segmentation errors. In the movies commonly used in developmental biology and stem cell research, both the number of imaged cells and the duration of recording are large, making this manual correction task extremely time-consuming. This has now become a major bottleneck in the fields of development, stem cell biology, and bioimage analysis. We present here EpiCure (Epithelial Curation), a versatile tool designed to streamline and accelerate manual curation of segmentation and tracking in 2D movies of large epithelial tissues. EpiCure uses temporal information and morphometric parameters to automatically identify segmentation and tracking errors and provides user-friendly tools to correct them. It focuses on ergonomics and offers several visualization options to help navigate movies of tissues covering a large number of cells, speeding up the detection of errors and their curation. EpiCure is highly interoperable and supports input from a wide range of segmentation tools. It also includes multiple export filters, enabling seamless integration with downstream analysis pipelines.
In this paper, using movies from several animal models, we highlight the importance of curating cell segmentation and tracking for accurate downstream analysis, and demonstrate how EpiCure helps the curation process for extracting accurate single-cell dynamics and detecting cellular events, making it faster and amenable to large datasets.
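Using temporal information to flag likely errors can be illustrated with a toy rule: flag any frame where a tracked cell's area jumps by more than a fractional tolerance between consecutive frames. This is a hypothetical simplification of EpiCure's morphometric checks, not its actual criterion.

```python
def flag_area_jumps(areas, tol=0.3):
    """Flag frames whose cell area differs from the previous frame by
    more than tol (fractional change) -- a crude temporal-consistency
    check for segmentation/tracking errors (illustration only)."""
    flags = [False]  # first frame has no predecessor to compare against
    for prev, curr in zip(areas, areas[1:]):
        flags.append(abs(curr - prev) / prev > tol)
    return flags
```

A sudden halving and recovery of the area, typical of a missed or split object in one frame, flags both the drop and the rebound for human review.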
Baraznenok, E.; Hsieh, H.-C.; Lan, L.; Konnick, E. Q.; Figiel, S.; Rao, S. R.; Woodcock, D. J.; Mills, I. G.; Hamdy, F.; Valk, J. E.; Carter, K. T.; Yu, M.; Paulson, T. G.; Dintzis, S.; Grady, W. M.; Liu, J. T. C.
Non-destructive 3D pathology methods have emerged in recent years with the potential to enhance standard 2D histopathology by greatly increasing the amount of tissue sampled by imaging and by providing volumetric morphological context. Another key advantage is that tissues remain intact, allowing re-embedding after imaging for potential long-term storage and future histological or molecular analyses. However, the impact of 3D pathology protocols on biomolecules -- including DNA, RNA, and proteins -- and their compatibility with downstream assays, has not been systematically evaluated. Here, we applied a previously optimized 3D pathology protocol -- involving deparaffinization, fluorescent H&E-analog staining, optical clearing, and open-top light-sheet microscopy -- to formalin-fixed paraffin-embedded (FFPE) specimens of breast, prostate, and head and neck cancer. Following the protocol, tissues were re-embedded in paraffin and compared with paired FFPE controls that did not undergo 3D pathology processing. DNA and RNA were extracted and subjected to quality assessments. Amplifiability was tested by PCR and reverse transcription quantitative PCR (RT-qPCR) of housekeeping genes. Although the results showed a slight decrease in the average yield and increased fragmentation of both DNA and RNA, amplifiability was largely preserved. Sanger sequencing of the PCR products confirmed accurate sequence determinations, while total RNA sequencing indicated that the global transcriptomic profile was largely unchanged. Immunohistochemistry (IHC) staining of common biomarkers produced comparable signals, suggesting those proteins are well preserved after the 3D pathology workflow. These results demonstrate the feasibility of combining 3D pathology with downstream molecular applications.
Pohar, C.; Rekik, Y.; Phan, M. S.; Gallet, B.; Desroches-Castane, A.; Chevallet, M.; Tinevez, J.-Y.; Tillet, E.; Vigano, N.; Jouneau, P.-H.; Deniaud, A.
The liver has a complex architecture composed of millions of lobules. Within these lobules, hepatocytes, the main hepatic cells, are organized in rows separated by blood capillaries known as sinusoids. These capillaries are lined by liver sinusoidal endothelial cells (LSEC) that form a very specific fenestrated endothelium essential for the exchange of metabolites and proteins between the blood and hepatocytes. Alterations in the size and number of LSEC fenestrations are associated with the onset and progression of various liver diseases. The analysis of liver architecture is thus of utmost importance for advancing our knowledge of liver ultrastructure and its alterations. Liver architecture has been studied for decades, mainly using 2D electron microscopy, and more recently using advanced super-resolution fluorescence microscopy. In recent years, volume electron microscopy techniques, including focused ion beam-scanning electron microscopy (FIB-SEM), have progressed and now enable the 3D reconstruction of biological ultrastructures down to nanometer resolution. However, the analysis of large volumes (e.g., several tens of µm³) remains challenging due to various constraints in the segmentation of large datasets. In the current study, we developed a workflow to semi-automatically segment hepatic sinusoids from FIB-SEM mouse liver datasets using the CNN-based (convolutional neural network) tool known as "nnU-Net", after fine-tuning a ground-truth model. We also implemented tools for semi-automatic quantification of LSEC fenestrae diameters and sinusoid porosity from segmented datasets. This workflow enabled us to compare the distribution of LSEC fenestrae diameters in wild-type versus Bmp9-deleted mice; BMP9 is a hepatic factor known to be involved in fenestration maintenance. Our results confirm the importance of BMP9 for LSEC differentiation.
Therefore, the developed methodology represents a valuable tool for characterizing the fenestrated endothelium under various physiological and pathological conditions.
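The fenestra-diameter and porosity quantities mentioned above reduce to simple geometry once segmentation masks are available. A minimal sketch with hypothetical helper names, assuming consistent area units and reporting the area-equivalent circular diameter:

```python
import math

def equivalent_diameter(area):
    """Diameter of a circle with the same area as a fenestra mask."""
    return 2.0 * math.sqrt(area / math.pi)

def porosity(fenestrae_areas, sinusoid_area):
    """Fraction of the sinusoid surface covered by fenestrae."""
    return sum(fenestrae_areas) / sinusoid_area
```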
Daul, C.; Tournier, P.; Habib, S. J.
Quantitative organelle analysis is highly sensitive to image-processing choices, limiting reproducibility across microscopy studies. Here, we systematically compare automated, interactive machine learning, and deep learning-based pipelines for lipid droplet and mitochondrial quantification in live human osteosarcoma cells imaged by fluorescence microscopy and label-free holotomography. Using standardized downstream feature extraction, we evaluated script-based workflows (Fiji, Python), a modular platform (CellProfiler), interactive machine learning (ilastik), and pretrained deep learning models. Lipid droplet segmentation was qualitatively consistent across approaches; however, droplet counts and size distributions varied substantially between pipelines and imaging modalities, with ilastik reducing background-driven detections and improving cross-modality agreement. In contrast, mitochondrial quantification proved highly sensitive to segmentation and skeletonization choices, particularly in holotomography, where global intensity-threshold-based methods failed to capture network structure. Based on these cross-pipeline comparisons, we demonstrate how organelle- and modality-specific benchmarking can guide pipeline selection, illustrated by the analysis of metabolic perturbations affecting lipid droplets and mitochondria. Together, these results highlight modality- and morphology-dependent limitations in common analysis pipelines and provide practical guidance for selecting robust, reproducible strategies for quantitative organelle imaging.
Zhang, X.; Liu, Q.
Radiogenomics offers a promising non-invasive approach for characterizing breast cancer (BC), yet its progress is often limited by the scarcity of cohorts containing matched imaging and multi-omics data. Recent advances in generative AI have enabled the synthesis of imaging phenotypes from genomic features, but prior work has focused on the combined influence of all genomic signals rather than isolating the effects of specific biological pathways. In this study, we introduce a perturbation-based radiogenomic framework that integrates multi-omics meta-genes with a conditional generative adversarial network (cGAN) to examine how pathway-level alterations influence synthetic BC MRI phenotypes. Seventeen meta-genes derived from Bayesian Tensor Factorization were perturbed at three levels (overexpression, base case, knockout) and synthetic DCE-MRI volumes were generated for each condition. Radiomic features were extracted using MedSAM-guided segmentation and PyRadiomics, followed by statistical evaluation using one-way ANOVA and Tukey post-hoc testing. Among the 17 pathways analyzed, only two meta-genes, representing cell cycle regulation and steroid hormone biosynthesis, produced significant and biologically interpretable changes in tumor size, heterogeneity, and textural patterns. These findings show that computational perturbation can uncover pathway-specific imaging signatures and offer mechanistic insights that complement traditional radiogenomics and explainable AI approaches. This work demonstrates the potential of perturbation-driven generative models to advance precision imaging genomics in BC.
Leyva, A.; Niazi, M. K. K.
There have been no systematic evaluations of purely spectral models for digital pathology tasks. We implemented and benchmarked four pipelines: binary classification on the BreaKHis dataset, multi-class region classification in glioblastoma, spatial transcriptomics, and denoising on Visium 10x. Across all tasks, extensive cross-validation and grouped splits showed that purely spectral models did not improve performance over CNN-only baselines, but offer useful complementary tools for interpretability and processing. Denoising showed strong performance that proves utility in data-scarce or heterogeneous image environments. Equivalence testing confirms that spectral and CNN model performances fall outside ±3% AUC. Fusion models between CNNs and spectral models show higher balanced accuracy. Spectral models failed to generalize across spatial transcriptomics tasks, with low correlation despite stable training loss. These findings represent a systematic negative result: despite their theoretical richness, spectral geometric features and SNO embeddings prove to be only complementary features for WSI classification or segmentation. Reporting such outcomes is essential to establish empirical boundaries for spectral methods and to encourage future work on conditions or data modalities where these approaches may hold greater promise.
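Equivalence testing of the kind reported here can be sketched as a margin check on the AUC difference. This is a simplified stand-in for a formal two-one-sided-tests (TOST) procedure, with hypothetical inputs; it only asks whether the difference, padded by its confidence half-width, lies entirely inside the ±margin band.

```python
def within_equivalence_margin(auc_a, auc_b, ci_half_width, margin=0.03):
    """Crude equivalence check: the AUC difference plus/minus its
    confidence half-width must fall entirely within +/-margin
    (simplified stand-in for formal TOST; illustration only)."""
    diff = auc_a - auc_b
    return (diff - ci_half_width > -margin) and (diff + ci_half_width < margin)
```

Two models at 0.85 vs 0.84 AUC with a ±0.01 confidence half-width would pass a 3% margin, while 0.90 vs 0.84 would not.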
Schneider, F.; Trinh, L. A.; Fraser, S. E.
Fluorescent reporters such as fluorescent proteins or chemigenetic indicators are indispensable tools for studying biological processes using light microscopy. Choosing an appropriate fluorescent tag is a crucial step in experimental design, not only for imaging but also for quantitative measurements such as fluorescence fluctuation spectroscopy. Two key parameters should be considered: fluorescence brightness and photobleaching. Changes in fluorescence intensity due to photobleaching are relatively easy to assess in different biological environments, while brightness is more elusive. Here, we develop and employ a fluorescence correlation spectroscopy (FCS) based excitation scan assay that determines fluorescent protein performance and validate it in tissue culture and zebrafish embryos. We employ our FCS pipeline to compare a set of 10 established fluorescent proteins as well as Halo and SNAP tags for both cellular imaging and measurements of diffusion dynamics with FCS. We show that mNeonGreen outperforms mEGFP in tissue culture and zebrafish embryos. We also compare StayGold variants against other green fluorescent proteins and chemigenetic reporters in tissue culture. Overall, we present a broadly applicable approach for determining fluorescent reporter brightness in the living system of interest.
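In the idealized FCS picture, the autocorrelation amplitude G(0) = ⟨δI²⟩/⟨I⟩² estimates 1/N (the inverse mean number of molecules in the focal volume), so apparent molecular brightness is ⟨I⟩·G(0). The toy estimate below works directly from an intensity trace and ignores shot-noise correction and the full correlation curve, both of which real FCS analysis requires; the function name is hypothetical.

```python
def molecular_brightness(trace):
    """Apparent molecular brightness from an intensity trace:
    G(0) = var/mean^2 ~ 1/N, so brightness = mean * G(0) = var/mean
    (ideal-case sketch; no shot-noise or background correction)."""
    mean = sum(trace) / len(trace)
    var = sum((v - mean) ** 2 for v in trace) / len(trace)
    g0 = var / mean ** 2
    return mean * g0
```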
Lüthi, J.; Cerrone, L.; Comparin, T.; Hess, M.; Hornbachner, R.; Tschan, A.; Glasner de Medeiros, G. Q.; Repina, N. A.; Cantoni, L. K.; Steffen, F. D.; Bourquin, J.-P.; Liberali, P.; Pelkmans, L.; Uhlmann, V.
The rapid growth in microscopy data volume, dimensionality, and diversity urgently calls for scalable and reproducible analysis frameworks. While efforts on the open OME-Zarr format have helped standardize the storage of large microscopy datasets, solutions for standardized processing are still lacking. Here, we introduce two complementary contributions to address this gap: 1) the Fractal task specification, defining OME-Zarr processing units that can interoperate across computational environments and workflow engines, and 2) the Fractal platform, using this specification to enable scalable and modular OME-Zarr-native analysis workflows. We demonstrate their use across diverse biological research data, including terabyte-scale multiplexed, volumetric, and time-lapse imaging. In a clinical setting, we show that Fractal workflows achieve near-identical quantification of millions of cells across independent deployments, demonstrating the reproducibility required for translational applications. With its growing community of contributors, the Fractal ecosystem provides a foundation for FAIR microscopy image analysis relying on open file formats.
Abbasi, H.; Ettema, L.; van Elk, R.; Eskes, M.; Doukas, M.; Koppes, S. A.; Keereweer, S.; Menzel, M.
Mapping peritumoral collagen fiber directionality in solid tumors may assist in determining cancer progression and support more personalized prognoses. However, existing microscopy techniques are often limited by a restricted field of view, high cost, or incompatibility with paraffin-treated tissues. Computational scattered light imaging (ComSLI) is a cost-effective whole-slide microscopy technique that reveals fiber orientations independent of sample preparation. Using glioma, colorectal, and head and neck cancer samples, we show for the first time that ComSLI maps fiber orientations in paraffin-treated tumor tissues, visualizes tumor growth pathways and desmoplastic reactions, and allows the study of collagen orientations relative to tumor boundaries.
Joca, H.; Silva, P. A.; Santos, J.; Dias, E.; Barbosa, T. P.; Degaki, K.; Morales, R.; Terra, M.; Rabelo, R. S.; Cardoso, M. B.; Saito, A.; Avelino, T. M.
Conventional two-dimensional (2D) histology relies upon destructive sample preparation and stereological estimation, frequently leading to sampling bias and loss of the spatial context required for understanding renal structural relationships. The architecture of the nephron and its associated microvasculature necessitates three-dimensional (3D) analysis to accurately characterize its complexity. Here, we detail a novel pipeline for high-resolution 3D histology of ex vivo murine kidneys using X-ray micro-computed tomography (micro-CT) at the high flux of a synchrotron light source. Soft-tissue contrast was optimized through an established phosphotungstic acid (PTA) staining protocol, enabling robust mapping of macro- and microstructures via absorption contrast. Multi-scale imaging was performed, providing whole-organ context at resolutions around 3 µm and achieving sub-micron detail (down to 400 nm) in targeted regions of interest (ROIs) of the renal cortex. Utilizing machine learning segmentation pipelines optimized for large volumetric datasets, we extracted crucial 3D quantitative morphometric data. The results presented herein demonstrate the accuracy and morphological insight achievable through synchrotron-based 3D imaging, establishing a robust method for quantitative preclinical research.
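The morphometric step above turns a binary 3D segmentation into quantitative measures. A minimal sketch of the simplest such measure, volume fraction and absolute volume from a voxel mask, assuming an isotropic voxel size (the 3 µm figure is borrowed from the abstract; the toy volume and the authors' actual morphometric pipeline are not from the source):

```python
import numpy as np

def volume_fraction(mask, voxel_size_um=3.0):
    """Basic 3D morphometry on a binary segmentation mask: returns the
    volume fraction of the segmented structure and its absolute volume,
    given an isotropic voxel edge length in micrometres."""
    mask = np.asarray(mask, dtype=bool)
    frac = mask.mean()                          # fraction of segmented voxels
    volume_um3 = mask.sum() * voxel_size_um**3  # absolute volume in um^3
    return frac, volume_um3

# Toy 4x4x4 volume with an 8-voxel segmented corner block
vol = np.zeros((4, 4, 4), dtype=bool)
vol[:2, :2, :2] = True
frac, v = volume_fraction(vol, voxel_size_um=3.0)
print(frac, v)  # 0.125 216.0
```

Because the whole organ is imaged rather than sampled in 2D sections, such measures avoid the stereological extrapolation (and its sampling bias) that the abstract identifies as the key limitation of conventional histology.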